Introduction
A decision tree is a flowchart-like graph structure of decision nodes, branches, and leaf nodes, starting from the root and ending at the leaves. Each internal node tests an independent variable, with the root node placed at the top of the tree, while each leaf holds the dependent variable (answer) or category label and is a terminal node of the tree. The decision at each stage of tree construction depends on the previous branching operation, which is crucial to the predictive capability of the tree. The information-gain branching method uses the concept of entropy to measure how much information a feature provides about a class and decides where to split the tree at each node.

Material and Methods
The first dataset describes the diagnosis of cardiac Single Photon Emission Computed Tomography (SPECT) images and contains 267 SPECT image sets (patients) and 22 attributes, such that each patient is classified into two categories: normal and abnormal. The second dataset contains 1024 binary attributes (molecular fingerprints) used to classify 1687 chemicals into two classes (binder to the androgen receptor/positive, non-binder to the androgen receptor/negative). In addition, a real-world dataset was used that contains 90 instances and 7 attributes, including the gender, blood pressure, blood sugar, cholesterol, smoking status, weight, and occupation of the patient, collected to predict the choice of treatment method (medical treatment or angiography). A new approach is proposed to produce a decision tree based on the T-entropy criterion. The method was applied to the three datasets, examined by 11 evaluation criteria, and compared with the well-known Gini index, Shannon, Tsallis, and Renyi entropy methods for splitting the decision tree, with a proportion of 75% for training and 25% for testing, over 300 executions (each execution produces a decision tree).
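The baseline splitting criteria compared in this study (Gini index, Shannon, Tsallis, and Renyi entropies) all score a candidate split by the impurity drop between a parent node and its children. The following is a minimal illustrative sketch, not the paper's implementation; the function names and the choice of q = α = 2 for Tsallis and Renyi are our assumptions, and the proposed T-entropy criterion itself is not reproduced here:

```python
import numpy as np

def shannon(p):
    """Shannon entropy H = -sum(p * log2 p), ignoring zero-probability classes."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def gini(p):
    """Gini impurity: 1 - sum(p^2)."""
    return 1.0 - np.sum(p ** 2)

def tsallis(p, q=2.0):
    """Tsallis entropy S_q = (1 - sum(p^q)) / (q - 1); q=2 chosen for illustration."""
    return (1.0 - np.sum(p ** q)) / (q - 1.0)

def renyi(p, alpha=2.0):
    """Renyi entropy H_a = log2(sum(p^a)) / (1 - a); alpha=2 chosen for illustration."""
    return np.log2(np.sum(p ** alpha)) / (1.0 - alpha)

def class_probs(y):
    """Empirical class probabilities of a label vector."""
    _, counts = np.unique(y, return_counts=True)
    return counts / counts.sum()

def impurity_gain(y_parent, y_left, y_right, criterion=shannon):
    """Impurity drop of a binary split: parent impurity minus the
    size-weighted impurity of the two child nodes."""
    n = len(y_parent)
    weighted = (len(y_left) / n) * criterion(class_probs(y_left)) \
             + (len(y_right) / n) * criterion(class_probs(y_right))
    return criterion(class_probs(y_parent)) - weighted
```

During tree construction, the split with the largest impurity gain under the chosen criterion is selected at each node; swapping the `criterion` argument is all that changes between the compared methods.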
Also, a comparison is made between T-entropy and the other discussed splitting measures in terms of the area under the ROC curve (AUC).

Results and Discussion
The performance of the splitting methods based on the Gini index, Shannon, Tsallis, and Renyi entropies, and T-entropy was examined. The evaluation criteria of accuracy (ACC), sensitivity, specificity, positive predictive value (PPV, or precision), F-score index (F1), negative predictive value (NPV), false discovery rate (FDR), false positive rate (FPR), false negative rate (FNR), and mean square error (MSE) were calculated for the three datasets. The maximum values for the first six criteria and the minimum values for the last four criteria indicate the better performance of the decision tree based on the introduced method. Also, the AUC value of the T-entropy method is higher than that of the other methods for all three datasets, which indicates that the true positive rate exceeds the false positive rate by a larger margin in the T-entropy method than in the other methods.

Conclusion
The results suggest that the proposed node-splitting method based on T-entropy behaves better than the other discussed methods for both low and high numbers of samples. Given the increasing growth of big-data problems on the one hand, and the superiority of the T-entropy splitting method over the other investigated methods on datasets of different sizes on the other, the benefit of this method is twofold.
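The first nine evaluation criteria listed in the Results all derive from the four confusion-matrix counts of a binary classifier. The following is a minimal sketch for reference; the function name and the example counts in the usage note are illustrative assumptions, not the paper's data:

```python
def classification_metrics(tp, fp, tn, fn):
    """Confusion-matrix criteria for a binary classifier:
    tp/fp/tn/fn are true/false positive/negative counts."""
    total = tp + fp + tn + fn
    return {
        "accuracy": (tp + tn) / total,
        "sensitivity": tp / (tp + fn),   # true positive rate (recall)
        "specificity": tn / (tn + fp),   # true negative rate
        "PPV": tp / (tp + fp),           # positive predictive value (precision)
        "NPV": tn / (tn + fn),           # negative predictive value
        "F1": 2 * tp / (2 * tp + fp + fn),  # harmonic mean of PPV and sensitivity
        "FDR": fp / (fp + tp),           # false discovery rate = 1 - PPV
        "FPR": fp / (fp + tn),           # false positive rate = 1 - specificity
        "FNR": fn / (fn + tp),           # false negative rate = 1 - sensitivity
    }

# Hypothetical counts, e.g. from one test split:
# classification_metrics(tp=40, fp=10, tn=45, fn=5) gives accuracy 0.85.
```

The complementary pairs (PPV/FDR, specificity/FPR, sensitivity/FNR) explain why maximizing the first six criteria and minimizing the last four point to the same tree.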